Exploration server: Health and musical practice

Please note: this site is under development!
Please note: this site is generated automatically from raw corpora.
The information it contains has therefore not been validated.

Time course of melody recognition: a gating paradigm study.

Internal identifier: 001B06 (Main/Exploration); previous: 001B05; next: 001B07


Authors: Simone Dalla Bella [Canada]; Isabelle Peretz; Neil Aronoff

Source:

RBID: pubmed:14674630

French descriptors

English descriptors

Abstract

Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.

DOI: 10.3758/bf03194831
PubMed: 14674630




The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Time course of melody recognition: a gating paradigm study.</title>
<author>
<name sortKey="Bella, Simone Dalla" sort="Bella, Simone Dalla" uniqKey="Bella S" first="Simone Dalla" last="Bella">Simone Dalla Bella</name>
<affiliation wicri:level="1">
<nlm:affiliation>University of Montreal, Montreal, Quebec, Canada. dalla-bella.2@osu.edu</nlm:affiliation>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>University of Montreal, Montreal, Quebec</wicri:regionArea>
<wicri:noRegion>Quebec</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Peretz, Isabelle" sort="Peretz, Isabelle" uniqKey="Peretz I" first="Isabelle" last="Peretz">Isabelle Peretz</name>
</author>
<author>
<name sortKey="Aronoff, Neil" sort="Aronoff, Neil" uniqKey="Aronoff N" first="Neil" last="Aronoff">Neil Aronoff</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2003">2003</date>
<idno type="RBID">pubmed:14674630</idno>
<idno type="pmid">14674630</idno>
<idno type="doi">10.3758/bf03194831</idno>
<idno type="wicri:Area/Main/Corpus">001B11</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">001B11</idno>
<idno type="wicri:Area/Main/Curation">001B11</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">001B11</idno>
<idno type="wicri:Area/Main/Exploration">001B11</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Time course of melody recognition: a gating paradigm study.</title>
<author>
<name sortKey="Bella, Simone Dalla" sort="Bella, Simone Dalla" uniqKey="Bella S" first="Simone Dalla" last="Bella">Simone Dalla Bella</name>
<affiliation wicri:level="1">
<nlm:affiliation>University of Montreal, Montreal, Quebec, Canada. dalla-bella.2@osu.edu</nlm:affiliation>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>University of Montreal, Montreal, Quebec</wicri:regionArea>
<wicri:noRegion>Quebec</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Peretz, Isabelle" sort="Peretz, Isabelle" uniqKey="Peretz I" first="Isabelle" last="Peretz">Isabelle Peretz</name>
</author>
<author>
<name sortKey="Aronoff, Neil" sort="Aronoff, Neil" uniqKey="Aronoff N" first="Neil" last="Aronoff">Neil Aronoff</name>
</author>
</analytic>
<series>
<title level="j">Perception &amp; psychophysics</title>
<idno type="ISSN">0031-5117</idno>
<imprint>
<date when="2003" type="published">2003</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Acoustic Stimulation (MeSH)</term>
<term>Adult (MeSH)</term>
<term>Attention (MeSH)</term>
<term>Auditory Perception (MeSH)</term>
<term>Female (MeSH)</term>
<term>Humans (MeSH)</term>
<term>Male (MeSH)</term>
<term>Mental Recall (MeSH)</term>
<term>Music (MeSH)</term>
<term>Practice, Psychological (MeSH)</term>
<term>Psychoacoustics (MeSH)</term>
<term>Reaction Time (MeSH)</term>
<term>Set, Psychology (MeSH)</term>
</keywords>
<keywords scheme="KwdFr" xml:lang="fr">
<term>Adulte (MeSH)</term>
<term>Attention (MeSH)</term>
<term>Femelle (MeSH)</term>
<term>Humains (MeSH)</term>
<term>Musique (MeSH)</term>
<term>Mâle (MeSH)</term>
<term>Perception auditive (MeSH)</term>
<term>Psychoacoustique (MeSH)</term>
<term>Rappel mnésique (MeSH)</term>
<term>Stimulation acoustique (MeSH)</term>
<term>Temps de réaction (MeSH)</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Acoustic Stimulation</term>
<term>Adult</term>
<term>Attention</term>
<term>Auditory Perception</term>
<term>Female</term>
<term>Humans</term>
<term>Male</term>
<term>Mental Recall</term>
<term>Music</term>
<term>Practice, Psychological</term>
<term>Psychoacoustics</term>
<term>Reaction Time</term>
<term>Set, Psychology</term>
</keywords>
<keywords scheme="MESH" xml:lang="fr">
<term>Adulte</term>
<term>Attention</term>
<term>Femelle</term>
<term>Humains</term>
<term>Musique</term>
<term>Mâle</term>
<term>Perception auditive</term>
<term>Psychoacoustique</term>
<term>Rappel mnésique</term>
<term>Stimulation acoustique</term>
<term>Temps de réaction</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">14674630</PMID>
<DateCompleted>
<Year>2004</Year>
<Month>02</Month>
<Day>03</Day>
</DateCompleted>
<DateRevised>
<Year>2019</Year>
<Month>12</Month>
<Day>10</Day>
</DateRevised>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">0031-5117</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>65</Volume>
<Issue>7</Issue>
<PubDate>
<Year>2003</Year>
<Month>Oct</Month>
</PubDate>
</JournalIssue>
<Title>Perception &amp; psychophysics</Title>
<ISOAbbreviation>Percept Psychophys</ISOAbbreviation>
</Journal>
<ArticleTitle>Time course of melody recognition: a gating paradigm study.</ArticleTitle>
<Pagination>
<MedlinePgn>1019-28</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Bella</LastName>
<ForeName>Simone Dalla</ForeName>
<Initials>SD</Initials>
<AffiliationInfo>
<Affiliation>University of Montreal, Montreal, Quebec, Canada. dalla-bella.2@osu.edu</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Peretz</LastName>
<ForeName>Isabelle</ForeName>
<Initials>I</Initials>
</Author>
<Author ValidYN="Y">
<LastName>Aronoff</LastName>
<ForeName>Neil</ForeName>
<Initials>N</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>Percept Psychophys</MedlineTA>
<NlmUniqueID>0200445</NlmUniqueID>
<ISSNLinking>0031-5117</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName UI="D000161" MajorTopicYN="N">Acoustic Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D000328" MajorTopicYN="N">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001288" MajorTopicYN="Y">Attention</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001307" MajorTopicYN="Y">Auditory Perception</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D005260" MajorTopicYN="N">Female</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D008297" MajorTopicYN="N">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D011939" MajorTopicYN="Y">Mental Recall</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009146" MajorTopicYN="Y">Music</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D011214" MajorTopicYN="N">Practice, Psychological</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D011571" MajorTopicYN="N">Psychoacoustics</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D011930" MajorTopicYN="Y">Reaction Time</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D012718" MajorTopicYN="N">Set, Psychology</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="pubmed">
<Year>2003</Year>
<Month>12</Month>
<Day>17</Day>
<Hour>5</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2004</Year>
<Month>2</Month>
<Day>5</Day>
<Hour>5</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2003</Year>
<Month>12</Month>
<Day>17</Day>
<Hour>5</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">14674630</ArticleId>
<ArticleId IdType="doi">10.3758/bf03194831</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Canada</li>
</country>
</list>
<tree>
<noCountry>
<name sortKey="Aronoff, Neil" sort="Aronoff, Neil" uniqKey="Aronoff N" first="Neil" last="Aronoff">Neil Aronoff</name>
<name sortKey="Peretz, Isabelle" sort="Peretz, Isabelle" uniqKey="Peretz I" first="Isabelle" last="Peretz">Isabelle Peretz</name>
</noCountry>
<country name="Canada">
<noRegion>
<name sortKey="Bella, Simone Dalla" sort="Bella, Simone Dalla" uniqKey="Bella S" first="Simone Dalla" last="Bella">Simone Dalla Bella</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>
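
As an illustration (not part of the Dilib toolchain described below), the record above can also be consumed with standard XML tools. Note that the outer <record> uses undeclared namespace prefixes (wicri:, nlm:), which a strict parser rejects; the <pubmed> block, however, uses plain element names and parses directly. The sketch below embeds an abridged excerpt of that block and pulls out the PMID, DOI, and major MeSH topics; the excerpt length and variable names are ours.

```python
# Illustrative only: parse a shortened excerpt of the <pubmed> block above
# using Python's standard library.
import xml.etree.ElementTree as ET

PUBMED_EXCERPT = """<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">14674630</PMID>
<Article PubModel="Print">
<ArticleTitle>Time course of melody recognition: a gating paradigm study.</ArticleTitle>
</Article>
<MeshHeadingList>
<MeshHeading><DescriptorName UI="D009146" MajorTopicYN="Y">Music</DescriptorName></MeshHeading>
<MeshHeading><DescriptorName UI="D000328" MajorTopicYN="N">Adult</DescriptorName></MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<ArticleIdList>
<ArticleId IdType="pubmed">14674630</ArticleId>
<ArticleId IdType="doi">10.3758/bf03194831</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>"""

root = ET.fromstring(PUBMED_EXCERPT)
pmid = root.findtext(".//PMID")
doi = root.findtext(".//ArticleId[@IdType='doi']")
title = root.findtext(".//ArticleTitle")
# Major MeSH topics are the descriptors flagged MajorTopicYN="Y"
major_mesh = [d.text for d in root.iter("DescriptorName")
              if d.get("MajorTopicYN") == "Y"]

print(pmid, doi)
print(title)
print(major_mesh)
```

The same field paths apply to the full record once the undeclared prefixes are handled (e.g., with a namespace-agnostic or lenient parser).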

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001B06 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 001B06 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:14674630
   |texte=   Time course of melody recognition: a gating paradigm study.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:14674630" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021